We present the results of the Workshop on Multilingual Information Access (MIA) 2022 Shared Task, which evaluated cross-lingual open-retrieval question answering (QA) systems in 16 typologically diverse languages. In this task, we adapted two large-scale cross-lingual open-retrieval QA datasets covering 14 typologically diverse languages and newly annotated open-retrieval QA data in 2 underrepresented languages: Tagalog and Tamil. Four teams submitted their systems. The best system, which leverages iteratively mined diverse negative examples and a larger pretrained model, achieves 32.2 F1, outperforming our baseline by 4.5 points. The second-best system uses entity-aware contextualized representations for document retrieval and achieves a significant improvement on Tamil (20.8 F1), whereas most of the other systems score nearly zero on that language.
We propose a global entity disambiguation (ED) model based on contextualized embeddings of words and entities. Our model is based on BERT and is trained with our new training task, which enables it to capture both local (word-based) and global (entity-based) contextual information. The model solves ED as a sequential decision task, effectively using the two types of contextual information. We achieve new state-of-the-art results on five standard ED datasets: AIDA-CoNLL, MSNBC, AQUAINT, ACE2004, and WNED-Wiki. Our source code and trained model checkpoints are available at https://github.com/studio-ousia/luke.
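A minimal sketch of the sequential-decision view described above, not the authors' implementation: mentions are resolved one at a time, and each committed entity becomes global context for the remaining decisions. The `toy_score` function is a hypothetical stand-in for the BERT-based scorer.

```python
from typing import Callable, Dict, List, Sequence

def resolve_sequentially(
    mentions: Sequence[str],
    candidates: Dict[str, List[str]],
    score: Callable[[str, str, List[str]], float],
) -> Dict[str, str]:
    """At every step, pick the most confident (mention, entity) pair among the
    still-unresolved mentions and commit it; committed entities then serve as
    global context for the later decisions."""
    resolved: Dict[str, str] = {}
    remaining = [m for m in mentions if candidates.get(m)]
    while remaining:
        mention, entity, _ = max(
            ((m, e, score(m, e, list(resolved.values())))
             for m in remaining for e in candidates[m]),
            key=lambda triple: triple[2],
        )
        resolved[mention] = entity
        remaining.remove(mention)
    return resolved

# Toy scorer for illustration only: prefers candidates that overlap in characters
# with the mention and that are coherent with already-resolved entities.
def toy_score(mention: str, entity: str, context: List[str]) -> float:
    overlap = len(set(mention.lower()) & set(entity.lower())) / 10.0
    coherence = sum(entity.split("_")[-1] == c.split("_")[-1] for c in context)
    return overlap + coherence

print(resolve_sequentially(
    ["Liverpool", "Leeds"],
    {"Liverpool": ["Liverpool_FC", "Liverpool_(city)"],
     "Leeds": ["Leeds_United_FC", "Leeds_(city)"]},
    toy_score,
))
```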
Large-scale vision-language models such as CLIP have shown impressive performance on zero-shot image classification and image-to-text retrieval. However, this zero-shot performance of CLIP-based models does not carry over to tasks that require a finer-grained correspondence between vision and language, such as Visual Question Answering (VQA). We investigate why this is the case, and report an interesting phenomenon of CLIP, which we call the Concept Association Bias (CAB), as a potential cause of the difficulty of applying CLIP to VQA and similar tasks. CAB is especially apparent when two concepts are present in the given image while a text prompt only contains a single concept. In such a case, we find that CLIP tends to treat the input as a bag of concepts and attempts to fill in the other missing concept crossmodally, leading to an unexpected zero-shot prediction. For example, when asked for the color of a lemon in an image, CLIP predicts "purple" if the image contains a lemon and an eggplant. We demonstrate the Concept Association Bias of CLIP by showing that CLIP's zero-shot classification performance greatly suffers when there is a strong concept association between an object (e.g. lemon) and an attribute (e.g. its color). On the other hand, when the association between object and attribute is weak, we do not see this phenomenon. Furthermore, we show that CAB is significantly mitigated when we enable CLIP to learn deeper structure across image and text embeddings by adding an additional Transformer on top of CLIP and fine-tuning it on VQA. We find that across such fine-tuned variants of CLIP, the strength of CAB in a model predicts how well it performs on VQA.
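A small probe of this behavior can be written with the Hugging Face transformers CLIP API; the model name, prompts, and image path below are illustrative choices, not the exact setup used in the paper.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder path: an image that contains both a lemon and an eggplant.
image = Image.open("lemon_and_eggplant.jpg")
prompts = [f"The color of the lemon is {c}." for c in ["yellow", "purple", "green", "red"]]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)

# Under CAB, the "purple" prompt can receive a surprisingly high probability.
for prompt, p in zip(prompts, probs[0].tolist()):
    print(f"{p:.3f}  {prompt}")
```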
A practical issue of edge AI systems is that the data distribution of the training dataset and that of the deployment environment may differ due to noise and environmental changes over time. Such a phenomenon is known as concept drift, and this gap degrades the performance of edge AI systems and may cause system failures. To address this gap, retraining the neural network model when concept drift is detected is a practical approach. However, since available compute resources are strictly limited on edge devices, in this paper we propose a lightweight concept drift detection method that works in cooperation with a recently proposed on-device learning technique for neural networks. In this setting, both the neural network retraining and the proposed concept drift detection are performed by sequential computation only, to reduce computation cost and memory utilization. Evaluation results show that, while the accuracy decreases by 3.8%-4.3% compared to existing batch-based detection methods, the proposed approach reduces the memory size by 88.9%-96.4% and the execution time by 1.3%-83.8%. As a result, the combination of neural network retraining and the proposed concept drift detection method is demonstrated on a Raspberry Pi Pico, which has 264 kB of memory.
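The paper's detector is not reproduced here; the following is a generic sketch of what "sequential computation only" can look like: an O(1)-memory detector that builds a baseline of the monitored prediction error during a warm-up phase and flags drift when a moving average of recent errors leaves the baseline band. All hyperparameters and the synthetic error stream are illustrative.

```python
import random

class SequentialDriftDetector:
    """O(1)-memory drift detector: a Welford running mean/variance builds the
    baseline during warm-up; an exponential moving average (EMA) tracks recent
    errors, and drift is flagged when the EMA leaves the baseline band."""

    def __init__(self, alpha: float = 0.05, threshold: float = 4.0, warmup: int = 300):
        self.alpha = alpha          # EMA factor for the recent-error estimate
        self.threshold = threshold  # drift if |EMA - mean| > threshold * std
        self.warmup = warmup        # number of samples used to build the baseline
        self.n = 0
        self.mean = self.m2 = self.ema = 0.0

    def update(self, error: float) -> bool:
        self.n += 1
        self.ema = error if self.n == 1 else self.ema + self.alpha * (error - self.ema)
        if self.n <= self.warmup:   # baseline statistics via Welford's algorithm
            delta = error - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (error - self.mean)
            return False
        std = (self.m2 / max(self.warmup - 1, 1)) ** 0.5
        return abs(self.ema - self.mean) > self.threshold * std

# Synthetic per-sample error stream whose distribution shifts after 1000 samples.
random.seed(0)
stream = [random.gauss(0.2, 0.05) for _ in range(1000)] + \
         [random.gauss(0.6, 0.05) for _ in range(1000)]
detector = SequentialDriftDetector()
for step, err in enumerate(stream):
    if detector.update(err):
        print(f"drift detected at step {step}: trigger on-device retraining")
        break
```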
Out-of-distribution (OOD) detection has attracted a large amount of attention from the machine learning research community in recent years due to its importance in deployed systems. Most of the previous studies focused on the detection of OOD samples in the multi-class classification task. However, OOD detection in the multi-label classification task remains an underexplored domain. In this research, we propose YolOOD - a method that utilizes concepts from the object detection domain to perform OOD detection in the multi-label classification task. Object detection models have an inherent ability to distinguish between objects of interest (in-distribution) and irrelevant objects (e.g., OOD objects) on images that contain multiple objects from different categories. These abilities allow us to convert a regular object detection model into an image classifier with inherent OOD detection capabilities with just minor changes. We compare our approach to state-of-the-art OOD detection methods and demonstrate YolOOD's ability to outperform these methods on a comprehensive suite of in-distribution and OOD benchmark datasets.
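As a rough illustration of the underlying idea, rather than YolOOD's exact scoring, the sketch below aggregates per-box objectness and class probabilities from a detection head into image-level multi-label scores and uses the negated maximum class score as an OOD score.

```python
import torch

def image_level_scores(objectness: torch.Tensor, class_probs: torch.Tensor):
    """objectness: (num_candidates,), class_probs: (num_candidates, num_classes).
    Returns multi-label image scores and a scalar OOD score (higher = more OOD)."""
    per_box = objectness.unsqueeze(1) * class_probs   # confidence per box and class
    class_scores = per_box.max(dim=0).values          # image-level multi-label scores
    ood_score = -class_scores.max()                   # low max confidence => likely OOD
    return class_scores, ood_score

# Toy example with 3 candidate boxes and 4 in-distribution classes.
objectness = torch.tensor([0.9, 0.7, 0.1])
class_probs = torch.tensor([[0.80, 0.10, 0.05, 0.05],
                            [0.20, 0.60, 0.10, 0.10],
                            [0.25, 0.25, 0.25, 0.25]])
scores, ood = image_level_scores(objectness, class_probs)
print(scores.tolist(), float(ood))
```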
The research process involves many decisions, such as how a paper should be titled and where it should be published. In this paper, we introduce a general framework for investigating the effects of such decisions. The main difficulty in studying these effects is that we need to know counterfactual outcomes, which are not available in reality. The key insight of our framework is inspired by existing counterfactual analyses in which researchers regard twins as counterfactual units. The proposed framework regards a pair of papers that cite each other as twins. Such papers tend to be parallel works carried out on similar topics in similar communities. We investigate twin papers that adopted different decisions, observe how the research impact of these papers progresses, and estimate the effect of a decision by comparing their impacts. We release our code and data, which we believe are highly beneficial given the scarcity of datasets for counterfactual studies.
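A toy sketch of the twin-extraction step, not the authors' pipeline: twins are pairs of papers that cite each other, and a decision's effect is estimated from within-pair impact differences. The decision labels and impact numbers below are made up for illustration.

```python
# Toy citation edge list (citing_paper, cited_paper); real data would come from
# a bibliographic database.
citations = {("A", "B"), ("B", "A"), ("A", "C"), ("C", "D"), ("D", "C")}

# Twin pairs: papers that cite each other.
twins = {tuple(sorted(p)) for p in citations if (p[1], p[0]) in citations}
print(sorted(twins))   # [('A', 'B'), ('C', 'D')]

# Estimate a decision's effect by comparing impact within twin pairs whose
# members made different decisions (hypothetical labels and numbers).
decision = {"A": 1, "B": 0, "C": 1, "D": 0}        # e.g. 1 = adopted the decision
impact = {"A": 120, "B": 80, "C": 30, "D": 25}     # e.g. citation counts
diffs = [impact[x] - impact[y] if decision[x] > decision[y] else impact[y] - impact[x]
         for x, y in twins if decision[x] != decision[y]]
print(sum(diffs) / len(diffs))                     # average within-pair difference
```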
Generation of three-dimensional (3D) medical images has great application potential since it takes 3D anatomical structure into account. However, two problems hinder effective training of 3D medical generative models: (1) 3D medical images are very expensive to acquire and annotate, resulting in an insufficient number of training images, and (2) a large number of parameters are involved in 3D convolution. To address both problems, we propose a novel GAN model named 3D Split&Shuffle-GAN. To tackle the 3D data scarcity issue, we first pre-train a two-dimensional (2D) GAN model using abundant image slices and inflate the 2D convolution weights to improve the initialization of the 3D GAN. Novel 3D network architectures are proposed for both the generator and the discriminator of the GAN model to significantly reduce the number of parameters while maintaining image generation quality. A number of weight inflation strategies and parameter-efficient 3D architectures are investigated. Experiments on heart (Stanford AIMI Coronary Calcium) and brain (Alzheimer's Disease Neuroimaging Initiative) datasets show that the proposed method leads to improved 3D image generation quality with notably fewer parameters.
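One common inflation scheme, assumed here for illustration rather than taken from the paper, copies each pretrained 2D kernel along the new depth axis and divides by the depth so the 3D convolution initially mimics the 2D response on slice-constant volumes:

```python
import torch
import torch.nn as nn

def inflate_conv2d_to_3d(conv2d: nn.Conv2d, depth: int = 3) -> nn.Conv3d:
    """Build a Conv3d whose weights are the Conv2d weights repeated over depth."""
    conv3d = nn.Conv3d(
        conv2d.in_channels, conv2d.out_channels,
        kernel_size=(depth, *conv2d.kernel_size),
        stride=(1, *conv2d.stride),
        padding=(depth // 2, *conv2d.padding),
        bias=conv2d.bias is not None,
    )
    with torch.no_grad():
        w2d = conv2d.weight                                   # (out, in, kH, kW)
        conv3d.weight.copy_(w2d.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d

conv2d = nn.Conv2d(1, 8, kernel_size=3, padding=1)            # e.g. from a pretrained 2D GAN
conv3d = inflate_conv2d_to_3d(conv2d, depth=3)
print(conv3d.weight.shape)                                    # torch.Size([8, 1, 3, 3, 3])
```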
Recent advances in self-supervised learning make it possible to further reduce human intervention in multi-step pipelines whose focus evolves around specific objects of interest. In this paper, the focus is placed on cell nuclei in histopathology images. In particular, we aim to extract cellular information in an unsupervised manner for downstream tasks. As nuclei appear at various sizes, we propose a new scale-dependent convolutional layer to bypass scaling issues when resizing nuclei. On three nuclei datasets, we benchmark the following methods: handcrafted features, a pre-trained ResNet, a supervised ResNet, and self-supervised features. We show that the proposed convolutional layer improves performance and that, combined with Barlow Twins, it encodes nuclei better than the supervised paradigm in the low-sample setting and outperforms all other proposed unsupervised methods. In addition, we extend the existing TNBC dataset to incorporate nuclei class annotations, in order to enrich and publicly release a small-sample-setting dataset for nuclei segmentation and classification.
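For reference, a generic PyTorch formulation of the Barlow Twins objective mentioned above; the scale-dependent convolutional layer itself is not sketched here, and the trade-off weight is an illustrative value.

```python
import torch

def barlow_twins_loss(z_a: torch.Tensor, z_b: torch.Tensor, lambd: float = 5e-3):
    """z_a, z_b: (batch, dim) embeddings of two augmented views of the same images."""
    n, _ = z_a.shape
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-6)   # normalize along the batch
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-6)
    c = (z_a.T @ z_b) / n                              # (dim, dim) cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()     # invariance term
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # redundancy reduction term
    return on_diag + lambd * off_diag

z_a, z_b = torch.randn(64, 128), torch.randn(64, 128)
print(float(barlow_twins_loss(z_a, z_b)))
```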
Defending deep neural networks against adversarial examples is a key challenge for AI safety. To improve robustness effectively, recent methods focus on important data points near the decision boundary in adversarial training. However, these methods are vulnerable to AutoAttack, an ensemble of parameter-free attacks used for reliable evaluation. In this paper, we experimentally investigate the cause of their vulnerability and find that existing methods reduce the margins between the logits of the true label and the other labels while keeping their gradient norms non-negligibly large. The reduced margins and non-negligible gradient norms cause the vulnerability, since the largest logit can easily be flipped by a perturbation. Our experiments also show that the histogram of logit margins has two peaks, i.e., small and large logit margins. Based on this observation, we propose switching one-vs-the-rest loss (SOVR), which uses one-vs-the-rest loss for data with small logit margins so as to increase the margins. We find that SOVR increases logit margins more than existing methods while keeping gradient norms small, and that it outperforms them in terms of robustness against AutoAttack.
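A simplified sketch of such a switching loss, with the caveat that the exact switching rule and one-vs-the-rest formulation in the paper may differ: samples with a small logit margin receive a one-vs-the-rest logistic loss, the rest receive standard cross-entropy. The margin threshold below is an illustrative default.

```python
import torch
import torch.nn.functional as F

def switching_ovr_loss(logits: torch.Tensor, targets: torch.Tensor,
                       margin_thresh: float = 0.0) -> torch.Tensor:
    num_classes = logits.size(1)
    onehot = F.one_hot(targets, num_classes).bool()
    true_logit = logits.gather(1, targets.unsqueeze(1)).squeeze(1)

    with torch.no_grad():  # the margin only selects the loss; no gradient needed
        other_max = logits.masked_fill(onehot, float("-inf")).max(dim=1).values
        use_ovr = (true_logit - other_max) < margin_thresh

    ce = F.cross_entropy(logits, targets, reduction="none")
    # One-vs-the-rest logistic loss: push the true logit up, all other logits down.
    ovr = F.softplus(-true_logit) + F.softplus(logits).masked_fill(onehot, 0.0).sum(dim=1)
    return torch.where(use_ovr, ovr, ce).mean()

logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
loss = switching_ovr_loss(logits, targets)
loss.backward()
print(float(loss))
```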
We introduce a goal-aware extension of responsibility-sensitive safety (RSS), a rule-based approach to the safety assurance of automated driving systems (ADS). Making RSS rules guarantee goal achievement, in addition to the collision avoidance of the original RSS, requires complex planning over long sequences of maneuvers. To cope with this complexity, we introduce a compositional reasoning framework based on program logic, in which RSS rules can be systematically derived for smaller subscenarios and then combined to obtain RSS rules for bigger scenarios. As the basis of the framework, we introduce a program logic dFHL that accommodates continuous dynamics and safety conditions. Our framework presents a dFHL-based workflow for deriving goal-aware RSS rules; we also discuss its software support. We experimentally evaluated the derived RSS rules within a safety architecture. The results show that goal-aware RSS is indeed effective in achieving both collision avoidance and goal achievement.
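For context, a Python rendering of the longitudinal safe-distance rule of the original RSS (Shalev-Shwartz et al.), on which the goal-aware extension builds; the parameter values in the example are illustrative.

```python
def rss_min_longitudinal_distance(v_rear: float, v_front: float, rho: float,
                                  a_max: float, b_min: float, b_max: float) -> float:
    """Minimum distance the rear car must keep so it can still stop safely if the
    front car brakes at up to b_max while the rear car accelerates at up to a_max
    for the response time rho and then brakes at at least b_min."""
    d = (v_rear * rho
         + 0.5 * a_max * rho ** 2
         + (v_rear + rho * a_max) ** 2 / (2.0 * b_min)
         - v_front ** 2 / (2.0 * b_max))
    return max(0.0, d)

# Both cars at 20 m/s, 0.5 s response time, 2 m/s^2 max acceleration,
# rear car brakes at >= 4 m/s^2, front car brakes at <= 8 m/s^2.
print(rss_min_longitudinal_distance(20.0, 20.0, 0.5, 2.0, 4.0, 8.0))  # ~40.4 m
```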